Human-Centric Machine Learning
NeurIPS 2019 Workshop, Vancouver
Friday, 13 December 2019 -- West Level 2, Room 223-224
Overview
Machine learning (ML) tools are increasingly employed to inform and automate consequential decisions about humans, in areas such as criminal justice, medicine, employment, welfare programs, and beyond. ML has already demonstrated its tremendous potential not only to improve the accuracy and cost-efficiency of such decisions but also to mitigate certain human biases and prejudices. The technology, however, comes with significant challenges, risks, and potential harms. Examples include (but are not limited to) exacerbating discrimination against historically disadvantaged social groups, threatening democracy, and violating people's privacy. This workshop aims to bring together experts from a diverse set of backgrounds (ML, human-computer interaction, psychology, sociology, ethics, law, and beyond) to better understand the risks and burdens that big data technologies place on society, and to identify approaches and best practices that maximize the societal benefits of machine learning.
The workshop takes a broad perspective on human-centric ML and addresses a wide range of challenges from diverse, multi-disciplinary viewpoints. We strongly believe that for society to trust and accept ML technology, we need to ensure the interpretability and fairness of data-driven decisions. We must have reliable mechanisms that guarantee the privacy and security of people's data. We should demand transparency, not just in terms of the disclosure of algorithms, but also in terms of how they are used and for what purposes. And, last but not least, we need a modern legal framework that provides accountability and allows subjects to dispute and overturn algorithmic decisions when warranted. The workshop particularly encourages papers that take a multi-disciplinary approach to these challenges.
One of the main goals of this workshop is to help the community understand where it stands after a few years of rapid development and to identify promising research directions to pursue in the years to come. We therefore encourage authors to think carefully about the practical implications of their work, identify directions for future work, and discuss the challenges ahead.
This workshop is part of the ELLIS “Human-centric Machine Learning” program.
Call for papers and important dates
Topics of interest include but are not limited to:
- Fairness: algorithmic fairness, human perceptions of fairness, cultural dependencies
- Transparency & Interpretability: interpretable algorithms, explanations of ML systems, human usability of explanation methods
- Privacy: relationships between fairness, security, and privacy; alignment between mathematical notions of privacy and people's perceptions of privacy
- Accountability & Governance: existing legal frameworks; compliance of state-of-the-art fairness and interpretability techniques with regulations such as the GDPR; governance examples for human-centric ML decision-making
We accept submissions in the form of extended abstracts. Submissions must adhere to the NeurIPS format and are limited to 4 pages, including figures and tables. We allow an unlimited number of pages for references and supplementary material, but reviewers are not required to review the supplementary material.
We accept new submissions, submissions currently under review at another venue, and papers accepted earlier this year at an indexed journal or conference. Such recently accepted papers must still adhere to the formatting instructions above; in particular, they are also limited to 4 pages (not counting references and supplementary material). All papers must be anonymized for double-blind reviewing as described in the submission instructions and submitted via EasyChair.
The workshop will not have formal proceedings, but accepted papers will be posted on the workshop website. We emphasize that the workshop is non-archival, so authors can later publish their work in archival venues. Accepted papers will be presented as either a talk or a poster (to be determined by the workshop organizers).
Submission deadline: 15 Sep 2019, 23:59 Anywhere on Earth (AoE)
Author notification: 30 Sep 2019, 23:59 Anywhere on Earth (AoE)
Camera-ready deadline: 30 Oct 2019, 23:59 Anywhere on Earth (AoE) -- Please use this style file for the camera-ready version.
Invited Speakers
- Aaron Roth (UPenn)
- Alexandra Chouldechova (CMU)
- Been Kim (Google Brain)
- Deirdre Mulligan (Berkeley)
- Finale Doshi-Velez (Harvard)
- Krishna Gummadi (MPI-SWS)
Schedule
08:30 - 08:45 Welcome and introduction
08:45 - 09:15 Krishna Gummadi (invited talk)
09:15 - 10:00 Contributed talks: Fairness and predictions
• "Learning Representations by Humans, for Humans." Sophie Hilgard, Nir Rosenfeld, Mahzarin Banaji, Jack Cao and David Parkes
• "On the Multiplicity of Predictions in Classification." Charles Marx, Flavio Calmon and Berk Ustun.
• "On the Fairness of Time-Critical Influence Maximization in Social Networks." Junaid Ali, Mahmoudreza Babaei, Abhijnan Chakraborty, Baharan Mirzasoleiman, Krishna P. Gummadi and Adish Singla
10:00 - 10:30 Panel discussion: On the role of industry, academia, and government in developing HCML
10:30 - 11:00 Coffee break
11:00 - 11:30 Deirdre Mulligan (invited talk)
11:30 - 12:00 Contributed talks: Law and Philosophy
- "On the Legal Compatibility of Fairness Definitions." Alice Xiang and Inioluwa Raji
- "Methodological Blind Spots in Machine Learning Fairness: Lessons from the Philosophy of Science and Computer Science." Samuel Deng and Achille Varzi
12:00 - 13:30 Lunch and poster session
13:30 - 14:00 Aaron Roth (invited talk)
14:00 - 15:00 Contributed talks: Interpretability
- "Interpretable and Differentially Private Predictions." Frederik Harder, Matthias Bauer and Mijung Park
- "Benchmarking Attribution Methods with Ground Truth." Mengjiao Yang and Been Kim
- "A study of data and label shift in the LIME framework." Amir Hossein Akhavan Rahnama and Henrik Boström
- "bLIMEy: Surrogate Prediction Explanations Beyond LIME." Kacper Sokol, Alexander Hepburn, Raul Santos-Rodriguez and Peter Flach
15:00 - 15:30 Coffee break
15:30 - 16:00 Finale Doshi-Velez (invited talk)
16:00 - 16:30 Been Kim (invited talk)
16:30 - 17:00 Panel discussion: Future research directions and interdisciplinary collaborations in HCML
17:00 - 18:00 Poster session
18:00 - 18:15 Closing remarks
Accepted Papers
Links to camera-ready versions will be added after 30 October 2019.
- Learning Fair and Transferable Representations -- Luca Oneto, Michele Donini, Massimiliano Pontil and Andreas Maurer
- Optimal Decision Making Under Strategic Behavior -- Moein Khajehnejad, Behzad Tabibian, Bernhard Schoelkopf, Adish Singla and Manuel Gomez Rodriguez
- Saliency Methods for Explaining Adversarial Attacks -- Jindong Gu and Volker Tresp
- Privacy Enhanced Multimodal Neural Representations for Emotion Recognition -- Mimansa Jaiswal and Emily Mower Provost
- Do Machine Teachers Dream of Algorithms? -- Gonzalo Ramos, Christopher Meek, Jina Suh, Soroush Ghorashi, Felicia Ng and Nicole Sultanum
- On the Fairness of Time-Critical Influence Maximization in Social Networks -- Junaid Ali, Mahmoudreza Babaei, Abhijnan Chakraborty, Baharan Mirzasoleiman, Krishna P. Gummadi and Adish Singla
- Regression Under Human Assistance -- Abir De, Paramita Koley, Niloy Ganguly and Manuel Gomez Rodriguez
- Explainable Machine Learning in Deployment -- Umang Bhatt, Alice Xiang, Adrian Weller and Peter Eckersley
- On the Legal Compatibility of Fairness Definitions -- Alice Xiang and Inioluwa Raji
- Weight of Evidence as a Basis for Human-Oriented Explanations -- David Alvarez-Melis, Hal Daume, Jennifer Wortman Vaughan and Hanna Wallach
- An Active Approach for Model Interpretation -- Jialin Lu and Martin Ester
- ABOUT ML: Annotation and Benchmarking on Understanding and Transparency of Machine Learning Lifecycles -- Jingying Yang and Deborah Raji
- MonoNet: Towards Interpretable Models by Learning Monotonic Features -- An-Phi Nguyen and María Rodríguez Martínez
- Fair Generative Modeling via Weak Supervision -- Aditya Grover, Kristy Choi, Rui Shu and Stefano Ermon
- Towards User Empowerment -- Martin Pawelczyk, Johannes Haug, Klaus Broelemann and Gjergji Kasneci
- Mathematical decisions and non-causal elements of explainable AI -- Atoosa Kasirzadeh
- Incremental Fairness in Two-Sided Market Platforms: On Updating Recommendations Fairly -- Gourab K Patro, Abhijnan Chakraborty, Niloy Ganguly and Krishna P. Gummadi
- Interactive Image Restoration -- Zhiwei Han, Thomas Weber, Stefan Matthes, Yuanting Liu and Hao Shen
- Learning Representations by Humans, for Humans -- Sophie Hilgard, Nir Rosenfeld, Mahzarin Banaji, Jack Cao and David Parkes
- Probabilistic Bias Mitigation in Word Embeddings -- Hailey James-Sorenson and David Alvarez Melis
- What Is a Proxy and Why Is It a Problem? -- Margarita Boyarskaya, Solon Barocas and Hanna Wallach
- On the Multiplicity of Predictions in Classification -- Charles Marx, Flavio Calmon and Berk Ustun
- Benchmarking Attribution Methods with Ground Truth -- Mengjiao Yang and Been Kim
- Fairness Warnings -- Dylan Slack, Sorelle Friedler and Emile Givental
- Fair Meta-Learning: Learning How to Learn Fairly -- Dylan Slack, Sorelle Friedler and Emile Givental
- Prediction Focused Topic Models Via Vocab Filtering -- Jason Ren, Russell Kunes and Finale Doshi-Velez
- Assessing the Local Interpretability of Machine Learning Models -- Dylan Slack, Sorelle Friedler, Carlos Scheidegger and Chitradeep Dutta Roy
- Stretching the Effectiveness of MLE from Accuracy to Bias for Pairwise Comparisons -- Jingyan Wang, Nihar Shah and R Ravi
- Methodological Blind Spots in Machine Learning Fairness: Lessons from the Philosophy of Science and Computer Science -- Samuel Deng and Achille Varzi
- Interpretable and Differentially Private Predictions -- Frederik Harder, Matthias Bauer and Mijung Park
- On the Unintended Social Bias of Training Language Generation Models with Data from Local Media -- Omar U Florez
- Towards Fair Personalization by Avoiding Feedback Loops -- Gokhan Capan, Ozge Bozal, Ilker Gundogdu and Ali Taylan Cemgil
- Learning Fairness Metrics with Human Supervision -- Hanchen Wang, Nina Grgić-Hlača, Preethi Lahoti, Krishna Gummadi and Adrian Weller
- Oblivious Data -- Steffen Grunewalder and Azadeh Khaleghi
- DADI: Dynamic Discovery of Fair Information with Adversarial Reinforcement Learning -- Michiel Bakker, Duy Patrick Tu, Humberto Riveron Valdes, Krishna Gummadi, Kush Varshney, Adrian Weller and Alex Pentland
- A study of data and label shift in the LIME framework -- Amir Hossein Akhavan Rahnama and Henrik Boström
- bLIMEy: Surrogate Prediction Explanations Beyond LIME -- Kacper Sokol, Alexander Hepburn, Raul Santos-Rodriguez and Peter Flach
Sponsors
Program Committee
- Alison Smith-Renner
- Amir Karimi
- Ana Freire
- Berk Ustun
- Bernhard Kainz
- Borja Balle
- Chaofan Chen
- Diego Fioravanti
- Emilia Gomez
- Emmanuel Letouzé
- Eric Wong
- Francesca Toni
- Francesco Fabbri
- Gagan Bansal
- Gal Yona
- Gonzalo Ramos
- John Shawe-Taylor
- Julius Adebayo
- Krishnamurthy Dvijotham
- Lydia Liu
- Mark Riedl
- Matthew Kusner
- Melanie F. Pradier
- Michael Hind
- Mijung Park
- Min Wu
- Nina Grgic-Hlaca
- Novi Quadrianto
- Nozha Boujemaa
- Ricardo Silva
- Rohin Shah
- Sarah Tan
- Sergio Escalera
- Vaishak Belle
- Vlad Estivill-Castro
- Xiaowei Gu
- Xiaowei Huang
Organizers
- Plamen Angelov (Lancaster University)
- Silvia Chiappa (DeepMind)
- Manuel Gomez Rodriguez (Max Planck Institute for Software Systems)
- Hoda Heidari (ETH Zürich)
- Niki Kilbertus (MPI for Intelligent Systems, University of Cambridge)
- Nuria Oliver (Data-Pop Alliance, Vodafone Institute)
- Isabel Valera (Max Planck Institute for Intelligent Systems)
- Adrian Weller (The Alan Turing Institute, University of Cambridge)
Related workshops @ NeurIPS 2019
- Privacy in Machine Learning (PriML)
- Minding the Gap: Between Fairness and Ethics
- “Do the right thing”: machine learning and causal inference for improved decision making
- Joint Workshop on AI for Social Good
- Workshop on Federated Learning for Data Privacy and Confidentiality
- Fair ML in Healthcare
- AI for Humanitarian Assistance and Disaster Response
- Safety and Robustness in Decision-making
- Robust AI in Financial Services: Data, Fairness, Explainability, Trustworthiness, and Privacy